
Trust and Ethics in Technology Part 2: Military Application of AI - Bristol Technology Festival 2019
08-11-2019

This week, our student engineer Luke attended a talk on Trust and Ethics in Technology at the Bristol Museum and Art Gallery, as part of the Bristol Technology Festival held in early November. The event was hosted by Lanciaconsult under the banner of #beingrelevant2019 (see their site).

"The second lecture of note was the "Military Application of Artificial Intelligence - A Matter of Human Dignity?". The title alone piqued my interest, and so did the speaker. Mark Phelps, an RAF lawyer currently writing his PhD started his speech by saying that some may disagree with his viewpoint on the use of AI, but that it was an important conversation to have going into the future.

The issue, or rather the future, is autonomous weapons systems used by armed forces in many countries across the world. Since their first use in 2001, no weapon system has ever been used fully autonomously, without human interaction or intervention. They are also a long way off being used on the battlefield, let alone in any other practical military application.

It's not a matter of if, though, but when. The AI algorithms that power these weapons will have to adhere to certain values:

      Trusting the AI, meeting ethics
      Following just outcomes
      Military and societal values

Machine learning is already being applied in current and emerging technologies such as self-driving cars, medical equipment, and even streaming services like Netflix.

These are, he made a point of saying, all things we look upon positively. So why not weapon systems? It seems obvious; harming and killing others is something we don't wish to do. So then, why not let the machine do it?

Not only does this raise many ethical issues, it violates the concepts and conventions of warfare. With no human interaction, can it even be called warfare?


Source: PATUXENT RIVER, Md. (April 22, 2015) The Navy's unmanned X-47B receives fuel from an Omega K-707 tanker while operating in the Atlantic Test Ranges over the Chesapeake Bay.
This test marked the first time an unmanned aircraft refueled in flight. (U.S. Navy photo/Released)


Digging deeper, though, valid benefits of machine learning in warfare began to emerge. Not necessarily for aircraft or tanks, but for people: surveillance to ensure that intelligence is accurate, to aid in decision making, and to minimise casualties by plotting alternate routes or stratagems, or by processing data intelligently to give commanders on the battlefield the ability to make safer and more informed decisions for their troops.
He then referenced an organisation called CaKR, the Campaign against Killer Robots.

Whilst this raised some giggles from the crowd, his point was that instead of burying our heads in the sand and not developing this technology, we should in fact accelerate its development. "Regardless, this will exist one day."

He was quick to point out, though, that whilst in support of this, much consideration and careful implementation would be needed: an entire rewriting of existing conventions to accommodate the new and unfamiliar scenarios that we would encounter.

Take surrendering to an autonomous vehicle, for instance. What kind of decision would it take: would it take prisoners, as the Geneva Conventions currently require, or would it deem the targets too high a risk and kill them, knowing it could be saving its unit in the future? Whilst these machines are designed to save the lives of the troops on whose side they fight, they risk running into the paradox of a riskless war. The AI must operate as a moral agent of sorts; only with the mutual assumption of risk can it carry out its duty to protect its own country.

The paradox, then, lies in removing the human aspect of killing: it becomes effortless and easy. More lives may be harmed or spent against unremorseful weapons systems than would be saved.


Source: (February 2018) Modular Advanced Armed Robotic System, or MAARS.
(U.S. DoD photo/Released)

It ultimately comes down to a single, difficult-to-answer question: do humans deserve to be killed by humans? We alone have the ability to judge frailty, to render judgment based on the context and the experience gained in war.

Machines can't weigh value. A human life isn't what the person is fiscally worth, or their talents, or their actions; it's an unfathomable question that even most humans could not, or would not wish to, answer.
To be considered able to carry out any sort of duty, machines would need to have value, and therefore dignity. How would a machine gain this?

Despite this, if the machine were to gain such things, what would stop it from waging a war of annihilation against the enemy?
If the logic is:

    if (livesSaved > livesExtinguished) {
        commenceNuclearStrike();
    }

then why shouldn't the machine make that choice? After all, in this context, it has dignity. It has value. Doesn't it?"